How These Materials Were Made
The course materials for MAM5020F 2026 have been designed, written, and structured with substantial assistance from AI tools — primarily Claude (by Anthropic). This page explains what that means, what steps have been taken to ensure quality, and what you should do if you find an error.
What AI Was Used For
AI tools were used across the creation of these materials, including:
- Content drafting: Initial drafts of lesson text, explanations, and activity descriptions were generated with AI assistance and then reviewed and edited by the course convenor.
- HTML and CSS design: The visual design, page layouts, and responsive styling were created with AI assistance following UCT branding guidelines.
- Research and sourcing: AI tools were used to identify relevant readings, papers, and resources. All sources have been reviewed, but some may have been fabricated or misattributed by the AI (see below).
- Structuring and organisation: The overall course structure, lesson flow, and activity design were developed collaboratively between the course convenor and AI tools.
What Has Been Done to Ensure Quality
- Human review: All materials have been reviewed and edited by Assoc. Prof. Jonathan Shock before publication. AI-generated content has been checked for accuracy, clarity, and pedagogical appropriateness.
- Citation verification: References and citations have been checked against primary sources. However, given the volume of material, some errors may remain.
- Iterative improvement: Materials are updated throughout the course as errors are identified and new content is developed.
Known Risks
Despite thorough review, AI-generated content can contain subtle errors that are difficult to catch. The most common issues include:
- Hallucinated citations: References that look correct but point to papers that do not exist, or that misattribute findings to the wrong authors or journals.
- Incorrect statistics: Numbers, percentages, or dates that are plausible but inaccurate — sometimes drawn from the AI's training data rather than verified sources.
- Broken links: URLs that were valid at the time of creation but may have changed or expired.
- Oversimplified explanations: Technical concepts presented in ways that sacrifice important nuance for the sake of clarity.
Why We Are Transparent About This
This course is about generative AI in research. It would be inconsistent — and ironic — to use AI in creating the course materials without being open about it. We believe that:
- Transparency about AI use is an ethical obligation, not just a best practice.
- Seeing how AI was used (and where it falls short) is itself educational for students learning to work with these tools.
- The imperfections in AI-generated content are a feature, not a bug — they demonstrate exactly why the critical evaluation skills taught in this course matter.
A Note on Practicing What We Teach
Week 5 of this course covers the hallucinated citation crisis and teaches students to verify every AI-generated reference. The creation of these very materials encountered exactly this problem — several statistics and citations generated by AI during the drafting process turned out to be incorrect and had to be corrected. This experience directly informed the content of the verification exercises we ask students to complete.
Found an Error?
If you spot a broken link, an incorrect citation, a wrong statistic, or any other error in these materials, please email jonathan.shock@uct.ac.za. Corrections are welcomed and will be made promptly. You are also welcome to open an issue on the course GitHub repository.